tests, net, l2_bridge: mark interface stability scenarios as t3 #4619
Anatw wants to merge 1 commit into RedHatQE:main
Conversation
Interface stability scenarios take ~15 minutes to execute (5-minute monitoring per test). Moving to tier3 to keep tier2 lean.

Signed-off-by: Anat Wax <awax@redhat.com>
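The 5-minute monitoring window is the bulk of each test's runtime. As a rough illustration of the pattern (the helper name and poll interval here are hypothetical, not the repository's actual code; only the 300-second constant appears in the diff):

```python
import time
from typing import Callable, Final

# Mirrors the constant visible in the diff; the poll interval is an assumption.
STABILITY_PERIOD_IN_SECONDS: Final[int] = 300
POLL_INTERVAL_IN_SECONDS: Final[int] = 5


def assert_stable(
    check: Callable[[], bool],
    period: float = STABILITY_PERIOD_IN_SECONDS,
    interval: float = POLL_INTERVAL_IN_SECONDS,
) -> None:
    """Poll `check` for `period` seconds, failing on the first False result."""
    deadline = time.monotonic() + period
    while time.monotonic() < deadline:
        if not check():
            raise AssertionError("interface state changed during stability window")
        time.sleep(interval)
```

Two such monitoring windows back to back account for roughly 10 of the ~15 minutes, which is the stated motivation for moving these tests out of tier2.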
No actionable comments were generated in the recent review. Run configuration: `.coderabbit.yaml`, review profile ASSERTIVE, plan Pro. Files selected for processing: 1.
Walkthrough: A module-level pytest marker is added to a test file to classify all contained tests as tier3, enabling test suite categorization by execution tier.
Estimated code review effort: 1 (Trivial), ~3 minutes. Pre-merge checks: 4 passed, 1 failed (warning).
Welcome! This pull request will be processed automatically. Available command categories: PR status management, review & approval, testing & validation, container operations, cherry-pick operations, label management. The PR will be automatically approved once the merge requirements are met. For more information, refer to the project documentation or contact the maintainers.
/build-and-push-container

New container for quay.io/openshift-cnv/openshift-virtualization-tests:pr-4619 published
azhivovk
left a comment
Thanks!
Also need to backport this change to cnv-4.18-4.21
/verified

Used the new tier3 lane for verification.
Test execution plan request details

CRITICAL: You MUST post an inline review comment on the first changed line of the first file. As an expert software testing engineer, analyze all modified files in this PR and create a targeted test execution plan.

Your deliverable: a Test Execution Plan, following the analysis requirements and guidelines. Hardware-related checks (SR-IOV, GPU, DPDK) apply when the PR modifies fixtures for hardware-specific resources.

CRITICAL WORKFLOW COMPLETION RULES: When responding to this test execution plan request, you MUST follow the rules EXACTLY. Absolute prohibitions apply (violating them creates empty or meaningless reviews); the pull request review is visible to users, and additional comments are redundant noise. CRITICAL: Before setting "Run smoke tests: True", you MUST use shell scripts to check for a dependency path from the smoke tests to the changed files.
The relevant diff adds a module-level marker below the existing constant:

```python
STABILITY_PERIOD_IN_SECONDS: Final[int] = 300

pytestmark = pytest.mark.tier3
```
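A module-level `pytestmark` applies the marker to every test collected from the file, so no per-test decoration is needed. A minimal self-contained sketch (the `tier3` marker name comes from this PR; the test function names are made up for illustration):

```python
import pytest

# One assignment marks every test in this module as tier3 at collection time.
pytestmark = pytest.mark.tier3


def test_interface_stays_connected():
    assert True


def test_interface_keeps_its_address():
    assert True
```

With the marker registered in the project's pytest configuration, `pytest -m tier3` selects these tests and `pytest -m "not tier3"` deselects them, which is how tier2 runs stay lean.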
Test Execution Plan
- Run smoke tests: False (No dependency path from smoke tests to this file was found)
tests/network/l2_bridge/vmi_interfaces_stability/test_interfaces_stability.py: verify both tests collect and run under the tier3 marker
EdDev
left a comment
Interface stability scenarios take ~15 minutes to execute (5-minute monitoring per test). Moving to tier3 to keep tier2 lean.
Please try to be a bit more convincing.
Consider giving some overview of T3, what it means, and how moving this test there helps us. At the moment, the text only hints that the change is time related, but not why moving to T3 helps.
EdDev
left a comment
The PR description is not in sync with the PR template and the contribution documentation.
@coderabbitai why have you not commented about the PR template requirement? It should appear in your configuration.
Let me fetch the PR description to verify template compliance.
The PR only includes a short description, which does not satisfy the template. As for why this slipped through: the PR template enforcement is captured in my long-term memory (from a prior learning), not in a configuration that is applied during automated reviews.
@coderabbitai
I failed to surface the violation during the automated review. I've triggered a re-review now so the configured instructions are applied properly.

🛠 Debug Information Gathered
Root cause: looking at the effective parsed configuration, the PR template rules sit under a section that is not applied during automated reviews. To enforce repo-wide review instructions, they need to live in a configuration section that the review engine actually reads while reviewing.
Short description:
Interface stability scenarios take ~15 minutes to execute (5-minute monitoring per test). Moving to tier3 to keep tier2 lean.